Jordan Montgomery #47 of the New York Yankees in action against the Baltimore Orioles at Yankee Stadium on Sept. 12, 2020 in New York City. Jim McIsaac/Getty Images

Artificial intelligence continues to be newsworthy and certainly drives some stock market behaviour. Consider that senior executives at Standard & Poor’s 500 companies mentioned the terms AI or artificial intelligence an average of 3.7 times per analyst call late in the second quarter of 2023, more than double the 1.8 mentions per call a quarter earlier. Or that Upstart Holdings Inc., an AI-focused stock, surged by more than 400 per cent in a couple of months to about US$72 a share in August, before retreating to around US$27 recently.

Employees are also clearly worried about AI threatening their jobs. It was a major issue in the five-month-long Writers Guild of America strike, for example.

But amid the excitement and dread, it is useful to understand the limits of AI, at least as it stands today. As a recent article in the Harvard Business Review puts it: “Artificial intelligences are prediction machines. They can tell you the probability it will rain today, but they cannot tell you whether or not you should pack an umbrella. That’s because the umbrella decision requires more than just prediction.”

The decision requires a judgment, which reflects individual preferences and experiences. When the weather forecast gives a 10 per cent chance of rain, some people will rush for their umbrellas, while others stroll without a worry. It all comes down to personal preferences.
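For readers who like to see the logic spelled out, here is a minimal sketch in Python of the umbrella decision. The prediction supplies only the probability; the personal costs, which are illustrative numbers invented here, supply the judgment:

```python
# A prediction (rain probability) becomes a decision only after it is
# combined with personal costs. All numbers here are illustrative.

def should_pack_umbrella(rain_prob, cost_of_getting_wet, cost_of_carrying):
    """Pack the umbrella when the expected cost of getting wet
    exceeds the certain cost of lugging it around."""
    return rain_prob * cost_of_getting_wet > cost_of_carrying

# The same 10-per-cent forecast, two different people:
print(should_pack_umbrella(0.10, cost_of_getting_wet=50, cost_of_carrying=2))  # True
print(should_pack_umbrella(0.10, cost_of_getting_wet=10, cost_of_carrying=2))  # False
```

Identical prediction, opposite decisions: the difference lies entirely in the preferences, not the forecast.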

We see this in the investment world with individual risk tolerances. Some people are risk-seeking enough to invest in cryptocurrency stocks or small early-stage tech companies; some are not. The probability of loss (or gain) from those investments is the same for every investor, but some have a preference and tolerance for a higher chance of loss than others. This is why determining an investor’s risk tolerance isn’t a straightforward exercise.
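The same threshold logic applies to investing, as a short hypothetical sketch shows: the predicted probability of loss is one number for everyone, but each investor’s tolerance sets a different cutoff.

```python
# The prediction (loss probability) is shared; the threshold is personal.
# The figures below are hypothetical, not estimates of any real asset.

def will_invest(loss_prob, risk_tolerance):
    """Invest only if the predicted chance of loss is within tolerance."""
    return loss_prob <= risk_tolerance

speculative_loss_prob = 0.40                     # one prediction for all
print(will_invest(speculative_loss_prob, 0.50))  # risk-seeker: True
print(will_invest(speculative_loss_prob, 0.10))  # conservative: False
```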

The interplay of prediction and judgment can be seen right now in the Major League Baseball playoffs. Jordan Montgomery, the star pitcher for the Texas Rangers, has won three of his starts in the playoffs so far. The New York Yankees traded him away last year because they didn’t trust him to win big postseason games.

Why did the Yankees do that? Because the data – the prediction machines – showed that starting pitchers are much less effective when facing batters for the third time in a game. As a result, starting pitchers such as Montgomery are often pulled early and replaced by relief pitchers.

But this deprives starting pitchers of the opportunity to develop the mental confidence and a toolbox of deceptive pitches that marked the big-name starting pitchers of the past. It also tires out the relief pitching staff, potentially costing games down the road.

The New York Yankees didn’t make the playoffs in 2023. But this is a league-wide issue, one that also affected the 2023 Blue Jays, whose starting pitchers averaged five to six innings per game. Of course, starters can’t always make it to the late innings, owing to fatigue or ineffectiveness in a particular game. The key is to make a judgment about whether a pitcher can work his way out of a hole.

In baseball, the first inning often challenges even the best pitchers. Take Tom Seaver, a star for 20 seasons beginning in 1967. He had a 3.75 ERA (earned run average) in the first inning but got stronger as games went on, boasting a 2.75 ERA in the late innings. The same goes for Fernando Valenzuela, a standout in the 1980s and 1990s, who had a 4.25 ERA in the first inning but an impressive 2.19 ERA in the ninth. Both helped their teams win World Series titles.
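For readers unfamiliar with the statistic, ERA is simply earned runs allowed scaled to nine innings, so lower is better. A quick sketch of the arithmetic, with made-up sample figures:

```python
# ERA = earned runs allowed per nine innings pitched; lower is better.
# The sample inputs are made up for illustration only.

def era(earned_runs, innings_pitched):
    return 9 * earned_runs / innings_pitched

print(round(era(30, 98.0), 2))   # 2.76 -- late-innings-dominance territory
print(round(era(42, 100.0), 2))  # 3.78 -- a rougher, first-inning-like split
```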

There are limits to the prediction machine. It doesn’t factor in individual human preferences or experiences. Nor does it account for learning, adapting or adjusting on the fly. The batter that the pitcher faces in the later innings of a game isn’t the same batter faced in the early innings, nor is the pitcher the same. The prediction machine cannot – yet – account for that.

These limits of data and the prediction machine can have costly implications for companies and investors. In February, 2021, the tech-based real estate company Zillow Group Inc. confidently touted its AI model for valuing homes and used it to make cash offers to purchase properties. By November, 2021, however, Zillow had made an abrupt about-face. It shut down the service and took a US$304-million inventory writedown after buying homes at prices higher than it could recover by reselling them. The company’s stock plunged, and Zillow cut 25 per cent of its staff.

All of this is not to downplay the significance of AI or its impact on business and investors, now and for years to come. Analyzing reams of data and producing empirically testable conclusions is hugely valuable and saves enormous time.

But AI cannot, and should not, remove the need for human judgment. AI may make the right decisions based on the available data, but it might also lack the empathy that needs to be part of those decisions and of how they are delivered. In another Harvard Business Review article, “AI Isn’t Ready to Make Unsupervised Decisions,” the authors state: “AI can help with providing decision-making points, but humans must still be involved in making that decision – ultimately, it needs to be augmented intelligence instead of pure artificial intelligence.”

As the entrepreneur Gary Vaynerchuk put it: “If content is king, then context is God.” AI is on its way to mastering content; it has a long way to go to master context.
